Multiagent Trust Modeling for Open Network Environments Doctoral Thesis
Authors
Abstract
Trust models are specific knowledge structures designed to gather, hold, and share an agent's knowledge about the reliability of its partners: the trustees. These models are updated by observations generated through direct interaction with the trustees and by receiving observations or opinions from other agents. Existing trust models are built on the assumption of recurrent and similar interactions with well-defined partners. This assumption is valid in traditional application domains for multiagent systems, but does not hold in the emerging domains of ad-hoc and mobile ad-hoc networks, where the cooperating community, as well as the tasks being solved, evolves rapidly. To address this challenge, we have designed an overlay mechanism which can be integrated with most existing trust models and which allows agents to reason about the similarity of trusting situations and trustees. This similarity is used to predict the behavior of a previously unknown partner, or to select the relevant past experience for predicting the reliability of a particular trustee in a specific situation. Furthermore, we propose a specific approach to distributing the trust model among a group of cooperative agents, where each agent reasons about different aspects of the trustee's identity and situation description. In the last part of our work, we propose a specific application of the proposed technique in the domain of network intrusion detection, where we use our approach to integrate the data from several anomaly detection algorithms. The underlying anomaly detection algorithms work only with network traffic statistics and are typically prone to high levels of false positives, i.e. the classification of legitimate traffic as potentially malicious. Applying the collective trust modeling technique alleviates this problem and significantly reduces the error rate measured on real network traffic.
At the same time, applying the technique does not increase the rate of false negatives (malicious traffic misclassified as legitimate), and the performance characteristics of the technique allow its deployment for real-time traffic monitoring.
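The abstract describes the overlay at a high level only; as a purely hypothetical illustration of the core idea, the sketch below estimates trust in a new (trustee, situation) pair as a similarity-weighted average of past observed outcomes. The feature vectors, the Gaussian similarity kernel, and all function names are illustrative assumptions, not the thesis's actual formulation.

```python
import math

def similarity(a, b, bandwidth=1.0):
    """Gaussian kernel on feature vectors describing a trustee/situation.
    (An assumed similarity measure; the thesis does not specify one here.)"""
    dist2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-dist2 / (2 * bandwidth ** 2))

def predicted_trust(query, experiences, default=0.5):
    """Weighted mean of past observed reliabilities, weighted by how
    similar each past situation is to the query; falls back to a
    neutral prior when no past experience is relevant."""
    num = den = 0.0
    for features, outcome in experiences:
        w = similarity(query, features)
        num += w * outcome
        den += w
    return num / den if den > 1e-9 else default

# Usage: past interactions stored as (feature_vector, observed_reliability).
history = [([0.1, 0.2], 0.9), ([0.9, 0.8], 0.2)]
print(round(predicted_trust([0.15, 0.25], history), 2))
```

Because the query vector lies close to the first experience, the estimate is pulled toward that high observed reliability; an unknown trustee with no comparable history simply receives the neutral prior.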
Similar resources
Adapting Reinforcement Learning For Trust: Effective Modeling in Dynamic Environments
In open multiagent systems, agents need to model their environments in order to identify trustworthy agents. Models of the environment should be accurate so that decisions about whom to interact with can be done soundly. Traditional trust models are based on modeling specific properties of agents, such as their expertise or reliability. Building those models requires too many prior interactions...
Modeling agent trustworthiness with credibility for message recommendation in social networks
This paper presents a framework for multiagent systems trust modeling that reasons about both user credibility and user similarity. Through simulation, we are able to show that our approach works well in social networking environments by presenting messages to users with high predicted benefit.
Delegations and Trust
One of the fundamental notions in a multiagent system is that of delegation. Delegation forms the foundation for cooperation and collaboration among the members of a multiagent system. In diverse environments such as those formed by open multiagent systems, the various members constituting the environment are customarily alien to one another. Delegation decisions in such environments are necess...
Learning to Act Optimally in Partially Observable Multiagent Settings: (Doctoral Consortium)
My research is focused on modeling optimal decision making in partially observable multiagent environments. I began with an investigation into the cognitive biases that induce subnormative behavior in humans playing games online in multiagent settings, leveraging well-known computational psychology approaches in modeling humans playing a strategic, sequential game. My subsequent work was in a s...
Learning Trust in Dynamic Multiagent Environments using HMMs
In open multiagent systems, agents are owned by a variety of stakeholders and can enter and leave the system at any time. Therefore, trust is a fundamental concern in effective interactions which is a key component of such systems. In this paper, we propose a trust model for autonomous agents in mulitagent environments based on hidden Markov models and reinforcement learning. By this combinatio...
Publication date: 2008